Almost 30 years ago, Jule Charney made the first modern estimate of the range of climate sensitivity to a doubling of CO2. He averaged the results from two climate models (2ºC from Suki Manabe at GFDL, 4ºC from Jim Hansen at GISS) to get a mean of 3ºC, added half a degree on either side for the error, and produced the canonical 1.5-4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report. Admittedly, this was not the most sophisticated calculation ever, but individual analyses based on various approaches have not generally been able to improve substantially on this rough estimate, and indeed have often suggested that quite high numbers (>6ºC) were difficult to completely rule out. However, a new paper in GRL this week by Annan and Hargreaves combines a number of these independent estimates to come up with the strong statement that the most likely value is about 2.9ºC, with a 95% probability that the value is less than 4.5ºC.
Before I get into what the new paper actually shows, a brief digression…
We have discussed climate sensitivity frequently in previous posts and we have often referred to the constraints on its range that can be derived from paleo-climates, particularly the last glacial maximum (LGM). I was recently asked to explain why we can use the paleo-climate record this way when it is clear that the greenhouse gas changes (and ice sheets and vegetation) in the past were feedbacks to the orbital forcing rather than imposed forcings. This could seem a bit confusing.
First, it probably needs to be made clearer that, generally speaking, radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods. This corresponds in scope (not un-coincidentally) to the atmospheric component of General Circulation Models (GCMs) coupled to (at least) a mixed-layer ocean. For this subsystem, many of the longer term feedbacks in the full climate system (such as ice sheets, vegetation response, the carbon cycle) and some of the shorter term bio-geophysical feedbacks (methane, dust and other aerosols) are explicitly excluded. Changes in these excluded features are therefore regarded as external forcings.
Why this subsystem? Well, historically it was the first configuration in which projections of climate change in the future could be usefully made. More importantly, this system has the very nice property that the global mean of instantaneous forcing calculations (the difference in the radiation fluxes at the tropopause when you change greenhouse gases or aerosols or whatever) is a very good predictor for the eventual global mean response. It is this empirical property that makes radiative forcing and climate sensitivity such useful concepts. For instance, this allows us to compare the global effects of very different forcings in a consistent manner, without having to run the model to equilibrium every time.
To see why a more expansive system may not be as useful, we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation associated with variations in the Earth's orbit. However, in the global mean, these changes sum to zero (or very close to it), and so the global mean sensitivity to global mean forcings is huge (or even undefined) and not very useful for understanding the eventual ice sheet growth or carbon cycle feedbacks. The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks, but that is only starting to be done in practice. Most discussions of the climate sensitivity in the literature implicitly assume that these are fixed.
So in order to constrain the climate sensitivity from the paleo-data, we need to find a period during which our restricted subsystem is stable – i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume that the radiation is pretty much balanced. The last glacial maximum (LGM) fits this restriction very well, and so is frequently used as a constraint – from at least Lorius et al (1991), when we first had reasonable estimates of the greenhouse gases from the ice cores, to an upcoming paper by Schneider von Deimling et al, in which a multi-model ensemble (1000 members) is tested against LGM data, leading to the conclusion that models with sensitivities greater than about 4.3ºC can’t match the data. In posts here, I too have used the LGM constraint to demonstrate why extremely low (< 1ºC) or extremely high (> 6ºC) sensitivities can probably be ruled out.
In essence, I was using my informed prior beliefs to assess the likelihood of a new claim that climate sensitivity could be really high or low. My understanding of the paleo-climate record implied (to me) that the wide spread of results (from, for instance, the first reports of the climateprediction.net experiment) was a function of their methodology and not a possible feature of the real world. Specifically, if one test has a stronger constraint than another, it’s natural to prefer the stronger constraint; in other words, an experiment that produces looser constraints doesn’t make previous experiments that produced stronger constraints invalid. This is an example of ‘Bayesian inference’. A nice description of how Bayesian thinking is generally applied is available at James Annan’s blog (here and here).
Of course, my application of Bayesian thinking was rather informal, and anything that can be done in such an arm-waving way is probably better done formally, since you get much better control on the uncertainties. This is exactly what Annan and Hargreaves have done. Bayes theorem provides a simple formula for calculating how much each new bit of information improves (or not) your prior estimates, and this can be applied to the uncertain distribution of climate sensitivity.
A+H combine three independently determined constraints using Bayes Theorem and come up with a new distribution that is the most likely given the different pieces of information. Specifically, they take constraints from the 20th Century (1 to 10ºC), the constraints from responses to volcanic eruptions (1.5 to 6ºC) and the LGM data (-0.6 to 6.1ºC – a widened range to account for extra paleo-climatic uncertainties) to come to a formal Bayesian conclusion that is much tighter than each of the individual estimates. They find that the mean value is close to 3ºC, with 95% limits at 1.7ºC and 4.9ºC, and a high probability that sensitivity is less than 4.5ºC. Unsurprisingly, it is the LGM data that makes very large sensitivities extremely unlikely. The paper is very clearly written and well worth reading for more of the details.
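To make the Bayesian bookkeeping concrete, here is a minimal sketch (in Python, and not taken from the paper) that caricatures each of the three quoted ranges as a Gaussian likelihood over a grid of sensitivities and multiplies them together under a flat prior. The real A+H likelihood functions are skewed rather than Gaussian, so this toy version will not reproduce their exact numbers, but it does show why the combined constraint ends up tighter than any of the individual ones.

```python
import numpy as np

# Grid of candidate climate sensitivities (deg C per doubling of CO2)
S = np.linspace(0.0, 10.0, 1001)

def gaussian_likelihood(lo, hi):
    """Caricature a quoted range as a Gaussian likelihood,
    treating (lo, hi) as a rough 95% interval."""
    mu = 0.5 * (lo + hi)
    sigma = (hi - lo) / 3.92          # a 95% interval spans ~3.92 sigma
    return np.exp(-0.5 * ((S - mu) / sigma) ** 2)

# The three (roughly independent) constraints quoted above
L_20c  = gaussian_likelihood(1.0, 10.0)   # 20th-century warming
L_volc = gaussian_likelihood(1.5, 6.0)    # responses to volcanic eruptions
L_lgm  = gaussian_likelihood(-0.6, 6.1)   # LGM (widened range)

# Bayes: with a flat prior, the posterior is proportional to the product
posterior = L_20c * L_volc * L_lgm
posterior /= np.trapz(posterior, S)       # normalise to a probability density

mean = np.trapz(S * posterior, S)
cdf = np.cumsum(posterior) / np.sum(posterior)
lo95, hi95 = S[np.searchsorted(cdf, 0.025)], S[np.searchsorted(cdf, 0.975)]
print(f"mean ~ {mean:.1f} C, 95% range ~ {lo95:.1f} - {hi95:.1f} C")
```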
The mathematics therefore demonstrates what the scientists basically thought all along. Plus ça change indeed…
Steve Latham says
I could probably look this up, but what is 2x CO2? I mean, is that 2x the historical level or 2x the present level (or 2x some other level)? Why not simply refer to the actual ppm? Further, why are we stuck on 2x? If we don’t have to wait for equilibration every time, surely it can’t be too onerous to plot out the sensitivity from present levels to, say, 3x? Cognitive psychologists tell us that we deal better with discrete entities and integers, but the world (and particularly the world of probability) is more continuous… my soapbox argument against limiting discussion to values that are ‘convenient’.
[Response: It turns out not to matter (which is why it rarely gets mentioned). The forcing from CO2 is logarithmic at the concentrations we are discussing (~5.3 log(CO2/CO2_orig) ). That means that any doubling (from 1x pre-industrial to 2x pre-industrial, or 1x present to 2x present) gives roughly the same forcing. Specifically, 280 to 560 ppm, or 380 to 760ppm are equivalent. 3xCO2 gives ~60% more warming than 2xCO2. It’s always easier if people stick to a standard measure, and for good or bad we are stuck with 2xCO2 as the reference. – gavin]
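(To make the arithmetic in gavin's response explicit, a quick sketch using the simplified expression from Myhre et al. 1998; the coefficient 5.35 and the use of the natural logarithm are from that paper rather than stated above.)

```python
import math

# Simplified CO2 forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0), in W/m2
def co2_forcing(c, c0):
    return 5.35 * math.log(c / c0)

print(co2_forcing(560, 280))   # 280 -> 560 ppm: ~3.7 W/m2
print(co2_forcing(760, 380))   # 380 -> 760 ppm: also ~3.7 W/m2 (same ratio)
print(co2_forcing(840, 280) / co2_forcing(560, 280))   # 3x vs 2x CO2: ~1.6, i.e. ~60% more
```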
Tom Huntington says
According to CDIAC’s WWW site the average atmospheric CO2 mixing ratio at Mauna Loa was 374 ppm in 1976. Are you saying that Jule Charney’s CO2 doubling (to 748) gives comparable climate sensitivity to Annan and Hargreaves’ modern GCMs for a doubling to 748, or 2x today’s CO2 of 375, which would be 750? I think that we need to be more explicit when talking about doubling CO2. If sustained, the recently reported accelerating rate of increase in atmospheric CO2 indicates we are likely to achieve that projected 3 degree C warming earlier than once thought.
I guess I am surprised that, with better understanding of the importance of water vapor feedback, sulfate aerosols, black carbon aerosols, more rapid than expected declines in sea ice and attendant decreases in albedo, effects of the deposition of soot and dust on snow and ice decreasing albedo, and a recognition of the importance of GHGs that were probably not considered 30 years ago, the sensitivity estimate has changed so little over time.
[Response: We have to distinguish the intrinsic sensitivity of the climate to a forcing (such as 2xCO2) from the actual transient response to a whole suite of forcings. While we generally use 2xCO2 as the standard for the intrinsic sensitivity, we could have as easily used 1% increase in total solar irradiance, or 2x CH4 or something. The answer in deg C/ (W/m2) would be very similar. The aerosols don’t affect the intrinsic sensitivity, but they are very important for the transient solutions, as are the other GHGs and volcanoes and solar etc. -gavin]
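(As a rough illustration of the ‘deg C/(W/m2)’ framing in the response above – a sketch only; the solar constant of 1366 W/m2 and planetary albedo of 0.3 are standard textbook values, not numbers taken from this thread.)

```python
import math

# Express the canonical 3 C per CO2 doubling as an intrinsic sensitivity in C/(W/m2),
# then apply the same parameter to a different forcing (a 1% increase in TSI).
f_2xco2 = 5.35 * math.log(2.0)                      # ~3.7 W/m2 for doubled CO2
lam = 3.0 / f_2xco2                                 # ~0.8 C per W/m2

f_solar_1pct = 0.01 * 1366.0 / 4.0 * (1.0 - 0.3)    # ~2.4 W/m2 global-mean forcing
print(lam)                  # intrinsic sensitivity, C/(W/m2)
print(lam * f_solar_1pct)   # ~1.9 C equilibrium response to +1% TSI, same lambda
```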
Coby says
I am a little unclear as to what exactly is included in terms of feedbacks. I would assume that a prediction of 3ºC warming for 2x CO2 includes at least H2O feedbacks. Maybe it includes ice albedo feedbacks. It probably does not include methane from melting permafrost. Or I suppose it takes the view that it does not matter where the GHGs come from; such feedbacks of CO2 or methane only mean we get to 2x CO2 faster.
In short, is the conclusion that if one snapped one’s fingers and doubled CO2, at the end of a few decades it would be 3ºC ± warmer? What about the longer timeframe for ice sheet response and CO2 outgassing from the oceans etc?
pat neuman says
Coby,
I have concerns similar to those you expressed in 3. I recently reviewed an article presented at the 2006 AAAS symposium in St. Louis, MO. It seems that Mark Chandler, an atmospheric scientist at Columbia University, has similar concerns. He discussed a warming episode about 3 million years ago, in the middle Pliocene.
Excerpts:
“Ocean temperatures rose substantially during that warming episode – as much as 7 to 9 degrees Celsius (about 12 to 16 degrees Fahrenheit) in some areas of the North Atlantic. But scientists are puzzled. The carbon dioxide levels at that time “inferred from geochemical data” were roughly comparable to our own time, approaching 400 parts per million. Today’s computer models do not predict the sort of temperature rises that occurred during the middle Pliocene, Chandler said.” …
“You have to take some warning from the Pliocene,” he said. Even in the absence of huge amounts of carbon dioxide as a forcing mechanism, he said, there still appear to be trigger points that, once passed, can produce rapid warming through feedbacks such as changes in sea ice and the reflectivity of the Earth’s surface. … Earl Lane
Modern Lessons from Ancient Greenhouse Emissions
http://www.aaas.org/news/releases/2006/0216greenhouse.shtml
—
http://groups.yahoo.com/group/ClimateArchive/
—
Steve Latham says
Wouldn’t it be important to compare where the Earth was with respect to Milankovich cycles and other forcings at the time?
Hank Roberts says
Pliocene models, anyone?
I’d love to know what they did take into account in attempting to model that period — must include astronomical location, sun’s behavior, best estimates about a lot of different conditions — where the continents were, what the ocean circulation was doing, whether there had been a recent geological period that laid down a lot of methane hydrates available to be tipped by Pliocene warming into bubbling out rapidly. Maybe whether a prior era of subduction put a lot of carbon down into the magma for volcanos to belch out? The latter two I don’t think we can model yet. But I don’t know.
I’m sure someone’s addressed these in trying to model that period. Maybe our hosts can invite an author in?
And those models didn’t come up with the actual rapid warming that happened. Missed something, but what?
Caution being — we know it _did_ happen, then, so we know it _can_ happen, so it’s conservative (overly so) to assume that what did happen won’t happen.
Put another way — do other modelers include “whatever caused Pliocene rapid warming happening again” added to the upside error range for other models?
Roger Pielke, Jr. says
Recommended reading:
J.P. van der Sluijs, J.C.M. van Eijndhoven, B. Wynne, and S. Shackley, Anchoring Devices in Science For Policy: The Case of Consensus Around Climate Sensitivity, Social Studies of Science, vol 28, 2, April 1998, p. 291-323.
http://www.chem.uu.nl/nws/www/general/personal/sluijs_a.htm
Abstract:
This paper adds a new dimension to the role of scientific knowledge in policy by emphasizing the multivalent character of scientific consensus. We show how the maintained consensus about the quantitative estimate of a central scientific concept in the anthropogenic climate-change field – namely, climate sensitivity – operates as an ‘anchoring device’ in ‘science for policy’. In international assessments of the climate issue, the consensus-estimate of 1.5 degrees C to 4.5 degrees C for climate sensitivity has remained unchanged for two decades. Nevertheless, during these years climate scientific knowledge and analysis have changed dramatically. We identify several ways in which the scientists achieved flexibility in maintaining the same numbers for climate sensitivity while accommodating changing scientific ideas. We propose that the remarkable quantitative stability of the climate sensitivity range has helped to hold together a variety of different social worlds relating to climate change, by continually translating and adapting the meaning of the ‘stable’ range. But this emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows ‘the same’ concept to accommodate tacitly different local meanings. Thus the very multidimensionality of such scientific concepts is part of their technical imprecision (which is more than just analytical lack of resolution); it is also the source of their resilience and value in bridging (and perhaps reorganizing) the differentiated social worlds typical of modern policy issues. The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness.
Graham Jackson says
“The forcing from CO2 is logarithmic at the concentrations we are discussing (~5.3 log(CO2/CO2_orig)).”
I’ve attempted to derive a formula for this. Is there a reference available?
[Response: IPCC TAR (or Myhre et al, 1998) -gavin]
Blair Dowden says
Are there some equivalent formulae for estimating the radiative forcing caused by water vapor feedback?
Georg Hoffmann says
#7 Roger, are you sure the article is not a repetition of Sokal’s joke on “postmodern speech”? I mean, are you sure that the following sentences were not created by a computer program fed with some key words?
“But this emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows ‘the same’ concept to accommodate tacitly different local meanings” or
“The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness”.
HELP!!!
[Response: Indeed it reads like Sokal’s famous hoax. Climatologists would have dearly loved to narrow the uncertainty range of climate sensitivity, but until recently there has not been enough solid evidence to justify this. Neither is there reason to shift the range – most models (including large ensembles with systematic variation of uncertain parameters) give values smack in the middle of the traditional range, near 3 ºC. Finally, there is no good reason to widen the range, even though some studies have pointed to the possibility of higher climate sensitivity – but as we have discussed here, they did not provide positive evidence for a higher climate sensitivity, they merely showed that the data constraints used were weak. No doubt sociologists will have their theories on this, but my personal perception as a climatologist having worked on climate sensitivity is: the range has not changed over a long time simply because there was no clear physical evidence that would have justified such a change. -stefan]
pete best says
So how do we know that 2x pre-industrial or present CO2 (280 or 375 ppm respectively) leads to a 1.5 to 4.5ºC rise in temperature? I mean, have we a precedent in the available records on climate to know this for certain?
I have read for instance that by 2040 the Amazon will be drying out and has the potential by 2100 to have released some 35 Gigatonnes of additional CO2 into the atmosphere, pushing up ppm levels to around 1000 ppm.
cp says
Why is CO2 more important than, say, deforestation or forest fires (as a “plant removal”, not a CO2 producer) as far as climate changes are concerned?
[Response: Errm, well, its all a matter of putting greenhouse gases into the atmosphere. Most of the CO2 comes from fossil fuel burning; about 1/6 from deforestation I think – William]
Hank Roberts says
Following my own #6, I reread Gavin’s original article a few more times and find it very helpful in understanding what hasn’t been included in models (methane release for example). But that makes me suspect the sociology article can’t be about scientists directly involved in doing climate modeling — the modelers have to be explicit about the exact meaning of each concept taken into account, to be able to do their math.
If the sociologist is describing people outside the field — reading and interpreting the scientific publications — then it’s talking about politics (interpretations of reality, making alliances with people who disagree) — which makes more sense.
Were modelers specifically included or excluded, in that article? I suppose there are few enough modelers in the world that they will have talked about whether it’s describing them (and whether they were interviewed, for that matter).
Just speculating here, obviously.
Michael Tobis says
I agree with Georg’s (# 10) discomfort with Roger’s posting # 7.
Understanding the social and cognitive components of science is certainly important, but the abstract reads as if the possibility that we are discussing estimates of an objective quantity with an actual quantitative value is a matter of complete irrelevance.
Some people don’t believe in objective reality, but it’s hard to see why one should refer to them in rational conversation.
David B. Benson says
“Social Studies of Science” was started in 1970 and claims to be the leading international journal devoted to studies of the relation between society and science. But sociologists often write as unclearly as the abstract indicates. I recommend ignoring this journal, these authors and this paper as irrelevant to climatology. The abstract suggests the authors are interested in a different topic and just happened to use climatologists as subjects.
Steve Latham says
I don’t know, guys — I thought #7 wasn’t so bad, although I’m not fond of the abstract’s wordiness. RC is about conveying the science of global warming to the public. There are also many other outlets, including the mainstream media, trying to convey aspects of global warming to the public. Reading about how some researchers evaluate the communications among different groups may be insightful. I admit, though, that I haven’t rushed to read the paper because I don’t think the abstract says anything very surprising, it just says it in a dense way. In population genetics we have our own sort of benchmark comparisons, too, like effective population size and comparisons to one migrant per generation. Those concepts actually get abused (in the opinion of some people anyway) by people who want to oversimplify to make points; but without the abuse of the concepts these groups of people might not interact at all. I’m not sure which situation would be better.
Armand MacMurray says
Re:#9
Unfortunately, since e.g. even the sign of cloud effects is not known, this is one of the major aspects of the climate system that still needs to be sorted out.
pat neuman says
re 16. I think we’d be further ahead if more people read this (below) instead of that (7).
Fiddling As The Earth Burns
By Captain Paul Watson
25 March, 2006
http://www.countercurrents.org/cc-watson250306.htm
For example, … “In fact anytime anyone actually does something physically, they can expect to be condemned. The conservation movement seems to only value action on paper.”
Gar Lipow says
Question: I had the impression that recent results implied we were hitting a tipping point faster than we thought. I take it climate sensitivity is not the sole determinant for when a particular level of CO2 equivalent takes us past a tipping point.
Or to put it another way: assume sensitivity is actually 2.9ºC – how good is this news in terms of how much we have to cut, and how soon?
[Response: I’m not sure which “recent results” you are referring to. Certainly there has been lots of discussion in the media and in the scientific literature about the possibility of climate “tipping points”. However, it is all pretty hypothetical, and there is no known particular level of CO2 that puts us past a tipping point, if such a thing really exists. Indeed, one of the findings in the recent paper by Overpeck et al. (this week’s Science) is that even as the Greenland ice sheet melts faster than originally expected, it still won’t provide sufficient meltwater forcing of the North Atlantic circulation (which is the feature of the climate system most commonly implicated in the discussion of “tipping points”) to force any sort of threshold change. We’ll no doubt do a more extensive post on this at some point. As for good news…I suppose it is good news that the most extreme climate sensitivity estimates are likely to be wrong. But the basic finding that we have been right all along is not particularly good grounds for, say, reneging on Kyoto. –eric]
[Response: Concerning the North Atlantic circulation: I’m not sure whether Peck’s article supports the conclusion that Eric draws. Nobody knows for sure how fast Greenland would melt, and Overpeck does not provide the answer to that either. The simple rule of thumb is: melting Greenland during 1,000 years will provide an average freshwater inflow of 0.1 Sv (1 Sverdrup is one million cubic meters per second). Then, nobody knows how much freshwater it takes to shut down the Atlantic circulation. A recent model intercomparison concluded:
One needs to remember that Greenland is not the only source of freshwater, and that warming itself also reduces density. -stefan]
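(A quick back-of-envelope check of stefan’s 0.1 Sv rule of thumb – a sketch only; the Greenland ice volume of roughly 2.9 million km3 is an assumed standard figure, not quoted in the comment.)

```python
# Convert the Greenland ice sheet to an average freshwater flux over 1,000 years.
ice_volume_m3 = 2.9e6 * 1e9            # ~2.9 million km3 of ice, in m3 (assumed figure)
water_equiv_m3 = ice_volume_m3 * 0.9   # ice is ~0.9 the density of liquid water
seconds = 1000.0 * 3.156e7             # 1,000 years in seconds
flux_sv = water_equiv_m3 / seconds / 1e6   # 1 Sv = 1e6 m3/s
print(flux_sv)                         # ~0.08 Sv, i.e. roughly 0.1 Sv
```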
Gar Lipow says
I did some research and think I answered part of my own question. To some extent this could mean that we are heading for the middle scenario – that is, the response to a given amount of carbon equivalent will be pretty close to what has been the mainstream middle prediction all along. The one variable (aside from the fact that 2.9 is still the middle – you can still go above or below) is the type of forcing as far as feedbacks go. If, for example, the recent increase in speed of glacial melting is not noise but a real trend, icecap melting tends to provide faster feedback than other forcings and thus will give worse results. So in that case we are looking at something a bit worse than the middle case, but not a lot worse. Which would mean we still (barely) have time to stave off the worst. Have I understood correctly?
[Response: Yes, you have understood correctly. The chief uncertainty is the details of the feedbacks. Nicely put! -eric]
Graham Jackson says
Re response on 19
If the scientific opinion is that we are not near the “tipping point”, why is this not published?
Exaggerated reporting will give Science a bad name.
We might be crying “wolf” too often and when the real emergency is upon us nobody (including the politicians) will believe us.
pete best says
The great disasters of human-induced climate change appear to all be exaggerated in the press and pseudo-scientific press according to realclimate.org, and I have no reason not to believe them.
Take the BBC program Horizon, which claims (well, I think it does anyway) to promote a scientific view of certain issues. One such episode was on Global Dimming and one on the thermohaline system in the North Atlantic. The first program proposed that as humankind has mopped up its pollution, global warming may increase, as pollution seemed to be preventing a certain amount of sunlight from reaching the ground. RealClimate has commented on the episode, I believe, and has questioned some of its validity.
The other episode of interest concerned the thermohaline system in the North Atlantic, which appears to be weakening. This weakening has been attributed to increased meltwater from Greenland, I believe, along with increased run-off from rivers in Russia. Again this has been commented on by RealClimate and there appears to be no real evidence of thermohaline slowdown being caused by these means. One other factor here is increased evaporation at the equator, which has increased the salinity of tropical waters; along with increased precipitation at the poles, this seems to be making the thermohaline system move faster, which in turn carries more heat to the poles and hence increases polar ice melting and hence possibly a greater chance of slowdown of the thermohaline system. This appears in the book The Weather Makers as fact from the Woods Hole Oceanographic Institution. http://www.whoi.edu/institutes/occi/currenttopics/abruptclimate_15misconceptions.html
The only thing that appears to be really true about climate change and climate scientists is that increased CO2 will warm the world, but what the implications of this warming are scientists leave to other people.
Ben Coombes says
Re #11,
Pete, where did you hear that “By 2040 the Amazon will be drying out and has the potential by 2100 to have released some 35 Gigatonnes of additional CO2 into the atmosphere pushing up ppm levels to around 1000 ppm”?
Sounds intriguing but I have not been able to track it down.
Cheers
[Response: There is a Hadley Centre / HadCM3 study on this, using a version of the GCM with a vegetation model included – William]
pete best says
Re #22
It is in the book The Weather Makers by Tim Flannery (a recent release), where he details on page 196 the possibility of the Amazon rain forest drying out and returning to the atmosphere huge quantities of CO2 released from the soil. It was also detailed some years ago here in the UK in an Equinox program on Channel 4.
This is essentially a tipping-point event – or a Gaia revenge event, you might say – but in scientific terms it is a positive feedback, of which there appear to be many possibilities, including the following:
Thermohaline Slowdown – http://www.whoi.edu/institutes/occi/currenttopics/abruptclimate_15misconceptions.html#ocean_1
Massive methane release, as detailed in the greatest mass extinction event of all time. Some 5ºC of warming appears to make the oceans less dense, allowing methane clathrates to be released from the ocean floor en masse.
Steve Bloom says
RE #23: William, you’re saying the model run actually showed the potential for 1,000 ppm? With what underlying number? In any case this sounds like kind of an important result, unless it’s somehow deemed very unlikely. Could we please have a link to the study?
[Response: I don’t know about the 1,000 but Alistair has provided a link to the Nature paper – William]
pat neuman says
Climate projections need to account for enhanced warming due to global warming feedbacks, as discussed by James Zachos, professor of Earth sciences at the University of California, Santa Cruz, at the annual meeting of the American Association for the Advancement of Science (AAAS) in St. Louis in Feb 2006.
Excerpts from: Ancient Climate Studies Suggest Earth On Fast Track To Global Warming, Santa Cruz CA (SPX), Feb 16, 2006
Human activities are releasing greenhouse gases more than 30 times faster than the rate of emissions that triggered a period of extreme global warming in the Earth’s past, according to an expert on ancient climates.
Zachos … is a leading expert on the episode of global warming known as the Paleocene-Eocene Thermal Maximum (PETM), when global temperatures shot up by 5 degrees Celsius (9 degrees Fahrenheit). This abrupt shift in the Earth’s climate took place 55 million years ago at the end of the Paleocene epoch as the result of a massive release of carbon into the atmosphere in the form of two greenhouse gases: methane and carbon dioxide.
“The rate at which the ocean is absorbing carbon will soon decrease,” Zachos said.
Higher ocean temperatures could also slowly release massive quantities of methane that now lie frozen in marine deposits. A greenhouse gas 20 times more potent than carbon dioxide, methane in the atmosphere would accelerate global warming even further.
“Records of past climate change show that change starts slowly and then accelerates,” he said. “The system crosses some kind of threshold.”
…
http://www.terradaily.com/reports/Ancient_Climate_Studies_Suggest_Earth_On_Fast_Track_To_Global_Warming.html
Eli Rabett says
IMHO the Annan/Hargreaves estimate is correct UNLESS the system wanders into an area where the response is very non-linear (for want of a better word). For example, something like a major release of methane from clathrates, collapse of an Antarctic ice shelf, etc.
The method they use is perfectly valid for an interpolation, but much more risky for an extrapolation. They are extrapolating. Bridge builders who do this risk embarrassment.
[Response: But both of your examples are outside the ‘construct’ that is being used (i.e. fixed boundary conditions). You are correct in thinking that this is not a prediction though, rather it is a ‘best guess’ of a particular system property that has some relevance for the future. -gavin]
pete best says
It would seem to me that unless the earth operates systemically (GAIA like) then climate change will not be as bad for life as it could have been.
Alastair McDonald says
Re 23, 24 and others
Below is the first paper to raise the problem of a biotic feedback. However, since Annan is “implicitly” ignoring feedbacks it seems unlikely that he is including these results in his analysis, and so it is deeply flawed! There are many other feedbacks, most notably the ice-albedo effect of Arctic sea ice, which have already passed their tipping points. However, neither the scientists, whose incompetence it would illustrate, nor the politicians, whose impotence it would expose, wish to admit that. As the paper referenced by Pielke Jnr points out, “[The] emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows ‘the same’ concept to accommodate tacitly different local meanings.”
Hank posted a second quote that he could not understand. “The varying importance of particular dimensions of knowledge for different social groups may allow cohesion to be sustained amidst pluralism, and universality to coexist with cultural distinctiveness”. What it means is that the scientists’ agenda is faith in their models, whereas the politicians’ agenda is to get re-elected. Their re-election is impossible if they act to limit carbon emissions. And we all, who post here, do not want to believe that the situation is as desperate as it is, and so we are happy to accept the reassurances from James, Gavin, and William.
But the models are fatally flawed. Global warming is happening faster than the models predict. The models failed to reproduce the lapse rate, so the MSUs and radiosondes were blamed. Now, despite repeated attempts, it is still not possible to model rapid climate change, especially that at the end of the Younger Dryas. Even the start of the YD is only reproducible if unreasonable amounts of fresh water are included in the models. Moreover, it now seems unlikely that Lake Agassiz did flood the North Atlantic, triggering the YD.
I’ve already pointed out on RealClimate that the error in the models is in their treatment of radiation. Kirchhoff’s law only applies when there is thermodynamic equilibrium, and that does not apply to the air near the surface of the Earth when the thermal radiation is constantly changing, driven by a diurnal solar cycle. Moreover, the radiation from the greenhouse gases is calculated using Planck’s function for blackbody radiation, but greenhouse gas molecules emit lines, not continuous radiation. These two errors nearly cancel each other out, and so it is not obvious that the models are in fact wrong. However, a schema for handling radiation, which dates from before the quantum mechanical operation of triatomic molecules was even considered, is inevitably flawed.
Anyway here is the paper asked for:
Cheers, Alastair.
Letters to Nature
Nature 408, 184-187 (9 November 2000) | doi:10.1038/35041539
Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model
Peter M. Cox1, Richard A. Betts1, Chris D. Jones1, Steven A. Spall1 and Ian J. Totterdell2
Abstract
The continued increase in the atmospheric concentration of carbon dioxide due to anthropogenic emissions is predicted to lead to significant changes in climate. About half of the current emissions are being absorbed by the ocean and by land ecosystems, but this absorption is sensitive to climate as well as to atmospheric carbon dioxide concentrations, creating a feedback loop. General circulation models have generally excluded the feedback between climate and the biosphere, using static vegetation distributions and CO2 concentrations from simple carbon-cycle models that do not include climate change. Here we present results from a fully coupled, three-dimensional carbon-climate model, indicating that carbon-cycle feedbacks could significantly accelerate climate change over the twenty-first century. We find that under a ‘business as usual’ scenario, the terrestrial biosphere acts as an overall carbon sink until about 2050, but turns into a source thereafter. By 2100, the ocean uptake rate of 5 Gt C yr-1 is balanced by the terrestrial carbon source, and atmospheric CO2 concentrations are 250 p.p.m.v. higher in our fully coupled simulation than in uncoupled carbon models, resulting in a global-mean warming of 5.5 K, as compared to 4 K without the carbon-cycle feedback.
http://www.nature.com/nature/journal/v408/n6809/abs/408184a0.html;jsessionid=6FD9E0EEA1AB92BC821D10607313B78F
David Donovan says
RE 29
“I’ve already pointed out on RealClimate that the error in the models is in their treatment of radiation. Kirchhoff’s law only applies when there is thermodynamic equilibrium, and that does not apply to the air near the surface of the Earth when the thermal radiation is constantly changing, driven by a diurnal solar cycle. Moreover, the radiation from the greenhouse gases is calculated using Planck’s function for blackbody radiation, but greenhouse gas molecules emit lines, not continuous radiation. These two errors nearly cancel each other out, and so it is not obvious that the models are in fact wrong. However, a schema for handling radiation, which dates from before the quantum mechanical operation of triatomic molecules was even considered, is inevitably flawed.”
Really do not know what you are on about here. Usually in large-scale models, so-called band approximations are made to simplify and speed up the radiation calculations. However, these approaches can all be derived starting from a quantum mechanical view of gaseous absorption and emission. It is also standard practice to evaluate the accuracy of the band models (or correlated-k type approaches) against detailed line-by-line data and calculations. See the relevant part of a text such as “An Introduction to Atmospheric Radiation, 2nd edition, K.N. Liou, Academic Press” or any other textbook on the subject.
Ian Castles says
Gavin, a minor but non-trivial point first. In your sixth last line, you’ve put the Annan and Hargreaves (A&H) estimate of the lower bound of the 95% confidence limits for climate sensitivity at 1.9ºC. On my reading, it is 1.7ºC. This figure is given three times in the paper (twice on p. 10 and again in Figure 1 at the end). Am I missing something?
[Response: Whoops, my mistake. It is 1.7 and I’ve adjusted the post accordingly – gavin]
The more important point I want to make is that A&H has only just been published, and the cut-off date for papers for AR4 has passed. It is unfortunate that this paper, whatever its scientific merit, cannot be considered by the intergovernmental panel until the succeeding assessment, which is due to be published around 2013. As you rightly say, this paper confirms what scientists ‘basically thought all along’ (or at least for the past thirty years). But since the last assessment report in 2001 there’s been a change: things HAVEN’T remained the same.
The change since 2001 was stressed by Sir Nicholas Stern, Head of the UK’s Stern Review of the Economics of Climate Change in his keynote address at Oxford on 31 January (in the week before A&H was accepted by GRL). According to Sir Nicholas, ‘Scientists have been refining their assessment of the probable degree of warming for a given level of carbon dioxide in the atmosphere’, and ‘ranges from 2004 estimates are substantially above those from 2001 – science is telling us that the warming effect is greater than we had previously thought.’
The calculations of prospective warming in the OXONIA lecture and the accompanying discussion papers are based on the new climate sensitivity estimates by Murphy et al which were published in Nature, 12 August 2004, vol. 430, pp. 768-72. The 90% probability range in Murphy et al is 2.4 – 5.4ºC – i.e., 0.9ºC higher at both ends of the range than the ‘canonical 1.5 – 4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report.’
Sir Nicholas Stern stressed that things have changed, while A&H now argue that they have remained the same. Unfortunately, AR4 comes in between.
The issue of Nature which carried the Murphy et al paper included a related article by Thomas Stocker, and a paper in the issue of Science published on the following day (Kerr, “Climate Change: Three Degrees of Consensus”, vol. 305: 932-934) reported Gerald Meehl as polling the 14 models expected to be included in the IPCC assessment and finding a span of 2.6 – 4.0ºC in the 8 model results then available (Gerald Meehl and Thomas Stocker are the coordinating lead authors of the “Global Climate Projections” chapter of the AR4 scientific report, and James Murphy is a lead author of the same chapter).
It is of interest to relate the range cited by Meehl to the profile of likelihood functions for climate sensitivity charted in Figure 1 appended to A&H. The lower and upper points of the IPCC modellers’ range are, respectively, 0.9ºC above and 0.9ºC below A&H’s 95% confidence range of 1.7 to 4.9ºC (as shown by the red solid line representing “combination of the three constraints”). On the face of it the range of the IPCC models is centrally within the A&H 90% range, but visual inspection of Figure 1 suggests that A&H find that there is about a 45% probability that climate sensitivity is below the lower end of the range quoted by Meehl in August 2004 (of course the IPCC draft report, which I have not seen, may include models with lower sensitivity than 2.6ºC).
Note also that A & H qualify their findings as represented by the red solid line, as follows:
“In fact our implied claim that climate sensitivity has as much as a 5% chance of exceeding 4.5ºC is not a position that we would care to defend with any vigour, since even if it is hard to rule it out, we are unaware of any significant evidence in favour of such a high value.”
[Response: Well, actually the final cut-off date for ‘in press’ was at the end of March, so A+H does count – whether this has influenced the text is unknown at this point. I don’t think that Stern has any particular inside track to the IPCC draft and it shouldn’t be presumed that he speaks with any authority on the subject. The point you make about the Murphy et al (or Frame et al or Forest et al or Stainforth et al papers) is exactly the opposite of the point I was trying to make. Specifically, just because one method does not constrain the high end sensitivities, that does not imply that there is no constraint. The existing (stronger) constraints (based on the LGM for instance) are not overridden by a method that is not as strong. A+H’s conclusion that there is no positive evidence for extremely high sensitivities is completely correct. -gavin]
wayne davidson says
#29, I totally agree that the models appear to be quite conservative in their long term projections. One particular case in point was this past winter’s extremely warm periods; in fact, as I recall, Michael Mann wrote about North America’s sea of red temperature anomalies of January as something which is supposed to happen “20 years” from now. I must also point out the failure of seasonal temperature forecasts, quite glaring again for this past winter. Alastair’s suggestions should be taken quite seriously. I must also announce again, like a broken record, that running averages for the March 2006 Canadian high Arctic are totally warm: +5 to 10 degrees C warmer, more again like a polar model projection 20 years from now due to Polar Amplification, as in a previous post on RC.
Hank Roberts says
Attempting to oversimplify (grin)
— the models state specific conditions, and give us trends and a sensitivity estimate that holds so long as only those specific conditions have any effect.
— The paleoclimate history gives us trends; as they come up with higher resolution info (ice cores, mud cores, etc) they give us sudden dramatic changes as unexplained fact.
— The theorists and field researchers give us every now and then a new big fact (such as the existence, stability, and likely evidence of past events involving sudden releases of methane, plankton feedbacks, solar flare episodes).
After a while, the new big fact is well enough documented to fold into the next generation of models, but it shows up there as “curves A, B or C” — the likely trends depending on whether or not such a big event happens to happen (a large volcano, for example).
Big sudden events that occur as tipping points during slow trends aren’t in the models yet, although we can expect they must be in the details somewhere.
Someone knowledgeable beat up on that, will you? I’m trying to express the feeling as an ordinary reader that neither the modelers nor the ‘skeptics’ incorporate something everyone seems to actually believe — that big surprises do happen. The modelers rule them out to be able to create a model that can actually be run; the skeptics either believe the dice always roll in their favor or that nothing can go wrong … or something. And those who do expect the unexpected get called “alarmists.”
Alastair McDonald says
Re 33 (Hank again)
You are quite right. The ‘unexpected’ happened twice during the Younger Dryas, once at the start and once at the end. There is no evidence of a sudden change in CO2 then, and it has just been proved from ice core sampling that there was not a sudden methane release at the end to cause that rapid warming. What is evident from the dust during the cool phase and lack of dust during the warm phase is that the water vapour content of the air suddenly changed. H2O is the greenhouse gas which causes rapid climate change, not carbon dioxide or methane.
If carbon dioxide melts the Arctic sea-ice, the change in water vapour will be catastrophic, because it produces a positive feedback.
James Annan’s sensitivity is nonsense if it “implicitly” ignores other feedbacks.
Cheers, Alastair.
[Response: It is not ‘James Annan’s sensitivity’ – it is the universally accepted sensitivity, and it does include Arctic sea ice and water vapour feedbacks. It does not include ice sheet, dynamic ocean, or vegetation feedbacks. – gavin]
[Response: I suspect another common confusion here: the abrupt glacial climate events (you mention the Younger Dryas, but there’s also the Dansgaard-Oeschger events and Heinrich events) are probably not big changes in global mean temperature, and therefore do not need to be forced by any global mean forcing like CO2, nor tell us anything about the climate sensitivity to such a global forcing. As far as we understand them, they are due to a redistribution of heat within the earth system, very likely due to ocean circulation changes. They change the heat transport between hemispheres and cause a kind of “see-saw”: the south cools as the north heats up and vice versa, with little effect on the global mean. You can’t compare that to global warming. -stefan]
Hank Roberts says
Alastair, Gavin’s making more sense to me here, I think because he’s giving more explicit detail on the models I’m asking about.
I’m familiar as a lay reader with humidity-plus-methane arguments for the PETM, for example here (not at all sure these are more than opinions, since this was published as a letter not an article):
http://www.nature.com/nature/journal/v432/n7016/abs/nature03115.html;jsessionid=D73B299BB40A74A2761F5C95CBF13113
If you’ve published (or can refer to a publication) that tries to pull together models and fieldwork that gives support for this I’d welcome a pointer because I’m trying to understand how people get to agreement about these disparate arguments.
Gavin’s explanation of sensitivity above is the first clear explanation I’ve ever seen, making the point about what is — and is not — included in the many attempts to come up with a sensitivity estimate.
I’m curious how and when (in a ‘history or sociology’ way) the unexpected events are taken into consideration in public discussion by the actual working scientists — specifically.
I realize the modelers are experts within the limits they have set out and are wise not to talk outside of them — and the the field and theory people likewise are experts within the areas of their own work.
Maybe the current RC discussions are as good as it gets, so far.
Anonymous Coward says
I think what Alastair is alluding to is the fact that, say by 2050 when the Arctic Ocean will conceivably be ice-free in the summer, the atmosphere will have a much higher relative humidity than it has currently because of the open air-water interface, so this will have a magnifying effect beyond just the feedback from increased CO2.
What I am interested to know is whether the models have been run with this scenario and what effect this has on Northern Hemisphere climate, since in mid-summer the Arctic can receive nearly as much insolation as the tropics.
Hank Roberts says
Anony, Gavin answered your question before you asked it, above:
“[Response: … sensitivity … does include … water vapour feedbacks.]
Alastair McDonald says
Re 37
It is not really true that Gavin has answered that question. He said that sensitivity includes water vapour and arctic sea ice, but I suspect that the changes in sea ice in the models are much less than we are seeing in practice. IIRC the sea ice extent last summer was similar to that modeled for 2040. Dr Coward’s question was “Has an ice free arctic ocean been modelled?” That is a question I would also like answered, especially as when I last heard, it had not been.
Gavin has said that the greenhouse effect of CO2 increases with the logarithm of the concentration. That is also the formula used for water vapour. However, they are both wrong. They are derived using the current radiation theory which is incorrect. The radiation is not absorbed throughout the full height of the atmosphere, passing through it rather like an electric current through a conductor.
It is absorbed mainly in the bottom 30 metres. If the concentration doubles then the same amount of radiation is absorbed in the bottom 15 m. In other words the forcing is linear, not logarithmic. This has a profound effect on the way water vapour behaves, because it increases sub-exponentially with temperature. If the forcing were logarithmic, then that would cancel out the exponential effect. But since it is linear, if the temperature rises then the water vapour will run away, because the higher temperature leads to more water vapour, which causes more greenhouse warming, which leads to higher temperatures. Eventually the clouds cut off the source of heat and the system stabilises. This can be seen on a small scale in the tropics. On a larger scale it can cause El Niños, and on an even larger scale, abrupt climate change.
In the paper you cited, it showed how the water vapour was giving the extra boost to the temperature, but they could not explain it because their models were using the logarithmic relationship rather than the linear one. In other words, that paper is further evidence that my ideas are correct. However I expect I will be told yet again that since I make such extravagant claims, it is no wonder no-one believes me :-(
Cheers, Alastair.
[Response: Many of the IPCC AR4 runs produce ice-free (in the summer) conditions by 2100 (depending on the scenario), so there is nothing intrinsic to the GCMs that prevents this. You are fundamentally wrong in your descriptions of the radiation, as has been pointed out frequently here and elsewhere. -gavin]
Alastair McDonald says
Re 34 Stefan, in your response you suggest that I am confused, but I am the one who has an answer to rapid climate change. Compare that with the conventional view, which is that the bursting of the dam holding back Lake Agassiz stopped the THC and so triggered the Younger Dryas. But as you point out, there were other rapid climate events occurring in Dansgaard-Oeschger cycles. Were all of these triggered by Lake Agassiz dam bursts? Moreover, were the warming events at the end of those cycles caused by the Lake refilling! No, I don’t think it is me who is confused. If you will pardon a pun, I don’t think the current model for abrupt climate change holds water! Why, even its inventor, Wally Broecker, now says “I apologize for my previous sins.” http://www.sciencemag.org/cgi/content/summary/297/5590/2202
The answer lies in a short paper by Gildor and Tziperman (see below; there is also an informal description of it at http://www.poptel.org.uk/nuj/mike/acc/#ACC_Tziperman ). They too are slightly confused, trying to blame the 100 kyr cycles on sea ice. But their mechanism works beautifully for rapid climate change in the North Atlantic. The abrupt changes seen in the Greenland ice cores are due to sea-ice changes, and the slower changes are the growth or retreat of continental ice sheets. What G&T are missing is the linear effect of water vapour accelerating the ice-albedo effect of changes in the size of the sea-ice sheets. They also seem to be unaware that the Arctic sea ice still exists and that its abrupt collapse will trigger the next rapid warming!
Cheers, Alastair.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
ISSN: 1364-503X (Paper) 1471-2962 (Online)
Issue: Volume 361, Number 1810 / September 15, 2003
Pages: 1935 – 1944
DOI: 10.1098/rsta.2003.1244
Sea-ice switches and abrupt climate change
Hezi Gildor and Eli Tziperman
Abstract:
We propose that past abrupt climate changes were probably a result of rapid and extensive variations in sea-ice cover. We explain why this seems a perhaps more likely explanation than a purely thermohaline circulation mechanism. We emphasize that because of the significant influence of sea ice on the climate system, it seems that high priority should be given to developing ways for reconstructing high-resolution (in space and time) sea-ice extent for past climate-change events. If proxy data can confirm that sea ice was indeed the major player in past abrupt climate-change events, it seems less likely that such dramatic abrupt changes will occur due to global warming, when extensive sea-ice cover will not be present.
[Response: Alastair, I did not mean to say you were confused, but often people mix up global mean changes with the abrupt regional changes seen e.g. in the Greenland ice cores. You do seem not quite up to date with current thinking on abrupt climate changes (now I’m referring to your polemic question “Were all of these triggered by Lake Agassiz dam bursts?” etc.), and you’re quoting Broecker out of context here, which misleads your readers. Perhaps have a look at my review article in Nature?
Concerning sea ice: it does give a strong positive feedback which is included in all climate models. It plays an important amplifying role in our own theory of the Dansgaard-Oeschger and Heinrich events. But remember it is a feedback; sea ice does not just start changing by itself, so sea ice alone can never be a full explanation of these abrupt climate events. -stefan]
Hans Erren says
If the high climate sensitivity effect of the ice ages is a result of the hysteresis effect as proposed by Oerlemans and Van den Dool (1978), then the present observed sensitivity of 1K/2xCO2 cannot be much higher.
J. Oerlemans and H.M. Van Den Dool. 1978: Energy Balance Climate Models: Stability Experiments with a Refined Albedo and Updated Coefficients for Infrared Emission. Journal of the Atmospheric Sciences: Vol. 35, No. 3, pp. 371-381.
The analogy with PETM is also not correct because ocean bottomwater temperature was about 20 degrees higher than present, and atlantic ocean circulation was going across Panama into the pacific.
[Response: ‘Observed’? Hardly. An almost 30 year old paper though – that’s pretty good – I’ll look it up next time I’m near the library… – gavin]
Hans Erren says
No need to go to the library, it’s available online
here now!
Eli Rabett says
WRT Gavin’s response to 27, that is exactly my point, the confidence one can have in the Annan and Hargreaves limits decreases as you get outside of the limits set by recent climate parameters, not because their work is wrong, but because there may be other significant factors which become important. I think you have to explicitly state/understand this.
WRT Alastair McDonald’s tosh, he would have to show me that the distribution of population among quantum states was not thermal in order to justify what he says. This is not the case at any location below the stratopause.
Hank Roberts says
I’d welcome comments from Eli or Gavin on the content at the website Alastair points to:
http://www.poptel.org.uk/nuj/mike/acc/#ACC_Tziperman
Whether or not the content there supports Alastair’s solution, the author does try to explain abrupt events to the general reader, and is writing at a level many people can follow. I wish Holderness were participating here and helping simplify discussion (assuming the facts are right!)
Title is:
Abrupt Climate Change: evidence, mechanisms and implications
A report for the Royal Society and the Association of British Science Writers
by Mike Holderness, March 2003
Ian Castles says
Gavin, Thanks for your comment on my 31. I’m glad to learn that I was wrong to conclude that A&H was too late for AR4, and that this paper “does count” (though it’s unknown whether it has influenced the text of the Report). I’m also glad to know that you believe the A&H conclusion that there is no positive evidence for extremely high sensitivities is “completely correct”: I certainly hadn’t intended to suggest otherwise.
I don’t understand your comment that the point I made about the Murphy et al paper is exactly the opposite of yours. My only point on the paper was that the estimates of climate sensitivity therein had been relied upon in the Stern discussion papers. That’s a simple statement of fact.
Stern’s temperature projections were presented as having been “taken straight from a combination of the IPCC and the Hadley Centre.” I didn’t presume that Sir Nicholas spoke with any other authority, and I certainly didn’t endorse his alarmist conclusion, presented as a certainty, that under “business-as-usual … we can see that we are headed for some pretty unpleasant increases of temperature [of 4 or 5ºC].”
Curiously, the Stern documents define “business-as-usual” by reference to the IPCC A2 scenario, and calculate the required reductions in emissions against this benchmark. This is wrong for two reasons. First, the IPCC SRES (2000) states explicitly that “There is no business-as-usual scenario” (p. 27); and second, the population assumptions underlying A2 are totally unrealistic: the scenario assumes an end-century global population of 15.1 billion.
In 2001 the International Institute for Applied Systems Analysis (IIASA), which produced the A2 population projection, put the 95% confidence limits for the world’s end-century population at 4.3 – 14.4 billion. These estimates are published on IIASA’s website. Thus the authors of the population projection themselves consider the A2 population numbers to be highly unlikely. It is surprising that the IPCC considered this scenario suitable for use in AR4, but didn’t see the need to commission an equally (or more) likely low scenario with an end-century global population of 4.3 billion (40% lower than the lowest SRES scenario).
[Response: Why do you find this strange? Do you think climate modellers have an infinite amount of time to spend on doing scenarios? An improbable event with very bad consequences (the high population scenario) is much more relevant for policy decisions than an improbable event in which impacts are less severe. The high end tells policy makers the risk they are taking on. The low end only says that if we’re very lucky part of the problem could go away by itself, and we’ll all be breathing a sigh of relief and dancing in the streets. Put it this way: If you’re designing a nuclear reactor containment vessel, do you design it for the 5% chance that the pressure in a meltdown is 100 atmospheres or the 5% chance that the pressure in a meltdown is 10 atmospheres? As far as policy goes, there’s very little usable information content in the low end. –raypierre]
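As a rough check on the figures in Ian Castles’ comment above, here is a minimal arithmetic sketch; the value assumed for the lowest SRES end-century population (roughly 7.1 billion) is an illustrative assumption, not a number quoted in the comment.

```python
# Quick arithmetic check of the population figures quoted above.
# Units are billions of people at end-century (2100).
a2_population = 15.1          # population assumed by the A2 scenario
iiasa_95 = (4.3, 14.4)        # IIASA (2001) 95% confidence limits
sres_lowest = 7.1             # assumed lowest SRES end-century population (illustrative)

print(a2_population > iiasa_95[1])   # True: A2 lies above IIASA's own 95% upper bound
print(f"{1 - iiasa_95[0] / sres_lowest:.0%} below the lowest SRES scenario")  # prints 39%, close to the '40% lower' quoted
```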
Alastair McDonald says
Re 42 Eli wrote
“WRT Alister McDonald’s tosh, he would have to show me that the distribution of population among quantum states was not thermal in order to justify what he says. This is not the case at any location below the stratopause. ”
Can I just first say that we are discussing Earth Science here. Earth Science is fractal, by which I mean that for every rule there is an exception, and for every exception there is yet another exception. This recursive feature makes it very difficult to give an exhaustive explanation, and a simple one tends to be full of holes. Here is a simple explanation of how radiation in the atmosphere really works. I have tried to make it understandable at the expense of completeness.
First, it is only atoms in solids and liquids which have the electronic quantum states that are distributed thermally. Greenhouse gas quantum states are molecular vibrations whose distribution depends mainly on the frequency of molecular collisions which is controlled by the gas pressure not its temperature.
Above the stratopause, the air is too thin for collisions with greenhouse gas molecules to have a dominant effect, and that region is not considered to be in local thermodynamic equilibrium (LTE), i.e. it is non-LTE. (I think this is what you were referring to, Eli, with your mention of the stratopause.)
Below the stratopause, i.e. below the region of non-LTE, the pressure in any two adjacent layers of the atmosphere will be almost equal. So the radiation emitted by one layer towards the second will equal the radiation emitted by the second towards the first. In this case there is a radiative equilibrium known as LTE. (Originally this state was named ‘radiative equilibrium’ by Karl Schwarzschild when he first proposed it. At that time it was thought that gases radiated as blackbodies and, I presume, that was why it was renamed ‘local thermodynamic equilibrium’.)
The surface of the Earth radiates as a blackbody at its temperature, which is continually changing because it is heated by the sun during the day and cools during the night. But the radiation from the air above the surface is fixed by the pressure of the air, which remains fairly constant at around one atmosphere. Thus the radiation entering the atmosphere from the surface of the Earth is not equal to that radiated by the air to the surface. In other words, there is a layer of the atmosphere near the surface which is also non-LTE, but in a different way from that above the stratopause.
QED Eli.
Just to expand on that a little: the distribution of population among the quantum states of the greenhouse gas molecules is actually set in two ways, by collisions with air molecules (i.e. both greenhouse gas molecules and non-greenhouse gas molecules) and by excitation as a result of absorption in that greenhouse gas’s bands. This radiation can come from the surface of the Earth, from the Sun (which can be ignored here), or from other greenhouse gas molecules of the same species. In the mid atmosphere, the excitation from collisions and from radiation emitted by other greenhouse gas molecules is matched by the emission from the greenhouse gas molecules and their de-excitation by collisions; this is LTE.
The collision of a greenhouse gas molecule with a non-greenhouse gas molecule can result in the greenhouse gas molecule becoming excited and the non-greenhouse gas molecule losing kinetic energy. Collision of an excited greenhouse gas molecule will normally result in the deactivation of the greenhouse gas molecule and an increase in the kinetic energy of the non-greenhouse gas molecule. This change in kinetic energy is called thermalisation, and it changes the temperature of the air. Thus when the Earth is radiating with a greater intensity than the back radiation from the air, the excess radiation will be absorbed by the air molecules and the air will warm. At night, when the surface is cooler than the back-radiation brightness temperature, the air will cool until it reaches the surface temperature. But note that the air does not cool at a rate based on its temperature. It cools at a rate based on the difference between the surface temperature and the gas brightness temperature, and on the greenhouse gas density. In other words, heat flows from the air to the surface at a rate determined by the number of greenhouse gas molecules and their emission rate. Increasing the concentration of a greenhouse gas will increase the back radiation linearly, not logarithmically.
This is easy to calculate for CO2, but not for H2O, which may well be condensing (forming dew or frost!) or evaporating, so that its concentration is also changing.
So, I’ll stop here before it gets too complicated.
Cheers, Alastair.
[Response: Your idea of what LTE means is fundamentally flawed – it has nothing to do with whether a volume of air is radiatively cooling or not – see http://amsglossary.allenpress.com/glossary/search?id=local-thermodynamic-equilibrium1 . For further discussion of your ideas, please take it to sci.env – gavin]
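For readers who want to see how the linear scaling claimed in the comment above differs from the standard treatment, here is a minimal sketch. It compares the widely used simplified fit for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al., 1998), with a hypothetical linear scaling normalised to agree with that fit at doubled CO2; neither curve is taken from the comment or the response, and the sketch is purely illustrative.

```python
import math

C0 = 280.0  # reference CO2 concentration in ppm (pre-industrial, for illustration)

def forcing_log(c, c0=C0):
    """Standard simplified fit for CO2 radiative forcing (W/m^2), Myhre et al. 1998."""
    return 5.35 * math.log(c / c0)

def forcing_linear(c, c0=C0):
    """Hypothetical linear scaling, normalised to match the log fit at 2xCO2."""
    return forcing_log(2 * c0) * (c - c0) / c0

for c in (280, 420, 560, 1120):
    print(f"{c:5d} ppm: log fit {forcing_log(c):5.2f} W/m^2, "
          f"linear {forcing_linear(c):6.2f} W/m^2")
# The two curves agree at 2xCO2 by construction but diverge beyond it,
# which is why the linear-versus-logarithmic distinction matters.
```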
Hank Roberts says
> end-century population at 4.3 … billion [low end estimate]
Today’s population is about 6.5 billion — and half the people in most of the world are under age 18!
Is that low number a feedback result of possible climate change?
Ian Castles says
No Hank, it’s not, and I can assure you that IIASA’s population experts are fully aware of the age distribution of the world’s population. This is from evidence given by Professor Nebojsa Nakicenovic, Coordinating Lead Author of the IPCC, to the UK House of Lords Committee that inquired into The Economics of Climate Change on 8 March 2005:
‘The scenarios reported in [the SRES] were done with knowledge about the future population roughly state of the art of the year 1996… In the early 1990s it was felt that the most likely or medium population projections for the world [in 2100] was about 12 billion people, the top range perhaps 18 or so. By the time we were writing this (SRES) report the medium was 10 billion people. The highest range about 15 billion, the lowest about six billion people… The two main organisations that do population projections for the world are IIASA [and] … the United Nations. These scenarios are done very, very seriously and also reviewed with many, many groups and, as it happens, the two projections do tend to coincide. Ever since the (SRES) report was published, the population projections have been revised [downward]… [T]he medium population is no longer 10 billion but about eight billion people; the higher is now in the range of about 12 and the lower is in the range of four instead of six.’
Thus the projection characterised by the Stern Review as ‘business-as-usual’ (15 billion) is 25% higher than the high estimate given by Professor Nakicenovic – and Nakicenovic’s low estimate is 43% below the lowest projection used in the IPCC scenarios. This doesn’t mean that a 4 billion population is assessed by the experts as having a high probability, but it is more probable than the IPCC projection that Stern has adopted as ‘business-as-usual’.
[Response: The reduction in mid-range population estimates is great news, because even at the mid level projections, the prospects for keeping CO2 from doubling looked pretty bleak. Now the problem begins to look more tractable, meaning there’s all the more reason for governments to buckle down and take serious action. I hope you’re carrying this good news to the Australian government. –raypierre]
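To make the arithmetic behind the two percentages in that comment explicit, here is a minimal sketch; the roughly 7 billion figure assumed for the lowest SRES projection is an illustrative assumption, not a number given in the exchange.

```python
# Checking the two percentages quoted in Ian Castles' comment above.
# Figures are end-century world population in billions.
stern_bau = 15.0          # A2 population used by Stern as 'business-as-usual'
nakicenovic_high = 12.0   # 'the higher is now in the range of about 12'
nakicenovic_low = 4.0     # 'the lower is in the range of four'
sres_lowest = 7.0         # assumed lowest SRES end-century projection (illustrative)

print(f"{stern_bau / nakicenovic_high - 1:.0%} above the revised high estimate")    # 25%
print(f"{1 - nakicenovic_low / sres_lowest:.0%} below the lowest SRES projection")  # 43%
```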
Ian Castles says
Thanks Ray. On your comment on 44: if you agree that a 15 billion end-century population is highly unlikely, don’t you think that the IPCC might have said so? Then the British Government’s chief adviser on the economics of climate change might not have made the mistake of asserting that urgent action was necessary to avoid what he portrayed as a ‘business-as-usual’ outcome. What if the assessed probability of a 15 billion population was 1 in 1,000? Or 1 in 1,000,000?
[Response: And I’m glad you agree that the rosy low-end scenario is unlikely. –raypierre]
Re your comment on 45, I don’t know why you think the prospects for keeping CO2 from doubling are bleak. On the limited information available to me, they seem quite promising – but it certainly would have been helpful in making judgments on this point if the IPCC had modelled a low-medium population projection (as in the A1 and B1 scenarios) which made more moderate assumptions about growth in output and energy use. The Australian Government proposed that the IPCC consider such a scenario in its submissions to the scoping meetings in April and September 2003, but the IPCC decided in its wisdom that the SRES scenarios as they stood were suitable for use in AR4. I hadn’t realised that an element in the IPCC’s failure to take up the suggestion may have been that climate scientists had better things to do with their time.
[Response: I wasn’t part of the group that made the decisions on what scenarios to use, but it is indeed true that running simulations of scenarios is a great drag on the climate science community, so there is every reason to try to focus on the ones that will be most informative. To be sure, the SRES scenarios might not be optimal, but tweaking the way scenarios are constructed is really pretty inconsequential compared to the physical uncertainties climate scientists are trying to grapple with. The whole issue of the way scenarios are chosen is vastly overblown, as Gavin nicely explained in his post here on Lawson’s anti-IPCC diatribe.]
Hugh says
Hmmm…Ian
So that’s a plausible 4.3 billion low-end population ‘projection’, is it?
And a plausible 14.4 billion high end ‘projection’?
Would you acknowledge that either is probable, or do you think that the truth may lie somewhere in between?
Considering that the calculations that resulted in these two figures have been “done very, very seriously and also reviewed with many, many groups”, there appears to be a very similar spread between the poles (with significant room for many possible outcomes) as I’ve seen in some other peer-reviewed work I’ve glanced through. Only that work was using another metric that relied on actual physical principles, and not just on an economist’s trust that future populations may develop in such a globally PC manner (which I believe is a concept you have previously derided) that those in the developing countries no longer need to produce progeny in large numbers simply as a means of providing some sort of security for themselves… now what was that publication called again?
[Response: It would be a good idea to distinguish between demographics and economics. Demographics (population modelling) is a somewhat more tractable problem than economics. Nonetheless, as you note, it’s an imprecise science. I don’t hold this against demographers. There is simply no very reliable way to project fertility, and in particular the rapidity of the “demographic transition” whereby people tend to opt for lower fertility as the societies they participate in get richer. The unexpected speed of the demographic transition is one of the reasons for the downward shift in population projections — though against that one must also keep in mind that the US population is growing unusually rapidly for a developed country. Given the high per capita carbon emissions in the US, this is not a good thing. There are many possible futures, and IPCC is not in the business of predicting “the” future, particularly given that the future will depend in large measure on the actions taken in response to the information provided by IPCC. IPCC only provides a conditional picture of what various futures might look like. It’s the ironic fate of prophets that if they’re listened to, and disaster is averted as a result, they rarely get a pat on the back. Instead, the reaction tends to be, “See, it was all alarmism; the disaster never happened.” –raypierre]
Ian Castles says
Would I acknowledge that either is probable, or do I think that the truth may lie somewhere in between? Of course I think that the truth will probably lie somewhere in between: that’s the whole point of defining a 90% or 95% probability range. I’m not a demographer, and on the probabilities of different future global populations I’m ready to accept the findings of the two leading producers of demographic projections.
I wasn’t part of the group that decided on the scenarios either, and I certainly wouldn’t have been in favour of producing as many as 35 if I had been. As many economists have pointed out, the scenarios should have been designed so as to provide a more transparent connection between driving forces and emissions outcomes.
I find it ironic that on a thread which is discussing a new paper on the modelling of one of the main sources of uncertainty in climate projections (climate sensitivity), I am being criticised for trying to bring some rigour into the other main source of uncertainty: the future profile of emissions. In the end the improvement of climate projections depends upon reducing both sources of uncertainty and arriving at joint probability profiles.
In my opinion, some previous studies by scientists to estimate joint probabilities have misinterpreted the SRES. For example, Wigley and Raper (Science, 2001, 293: 452) charted the ‘frequency of occurrence of different 1990 to 2100 radiative forcings under the SRES emissions scenarios.’ But the conclusions on probabilities derived from this exercise are negated by the following explanation from 15 SRES authors:
‘The fact that 17 out of the 40 SRES scenarios explore alternative technological development pathways under a high growth … scenario family A1 does not constitute a statement that such scenarios should be considered as more likely than others …, nor that a similar large number of technological bifurcation scenarios would not be possible in any of the other three scenario families’ (Nakicenovic et al., “IPCC SRES Revisited: A Response”, Energy & Environment, 14 (2&3), 2003: 195).
Thus Wigley and Raper erred, I believe, in weighting the scenarios equally. This effectively gave a much greater weight to the high emission (A1) scenarios compared with scenarios in the other families that the SRES authors elected not to explore in the same detail, and led to a significant upward bias in the probability distribution. It’s also relevant that the SRES authors specifically stated that the high growth scenarios were ‘Highly unlikely’ (ibid., p. 196).
[Response: I agree absolutely that a lot could be done to improve the process of generating scenarios. It’s far from optimal. Your efforts at doing this are certainly most welcome. My beef is with those (like Lawson, and even some of the Economist editorial staff) who make out of the imperfection of the scenario generation a full-blown indictment of the whole IPCC enterprise. I am not a fan of Wigley and Raper’s equal weighting of the scenarios. For that matter, I don’t think any weighting of the scenarios is appropriate, because there is really no reliable basis to assign probabilities (yes I know, I’m about to hear from the Bayesians again…). I think that in a case like this, aggregating the different forecasts is inappropriate, and throws away too much information. The information is in the full spread of the forecasts, and as Judge Posner notes, too much attention has been paid to the mid-range and not enough to the extremes of what is possible. –raypierre]
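To put numbers on the weighting point raised above, the sketch below contrasts equal per-scenario weighting (as attributed to Wigley and Raper) with equal per-family weighting. Only the 17-of-40 count for the A1 family comes from the SRES authors’ statement quoted earlier; the counts assumed for the other three families are illustrative.

```python
# How equal per-scenario weighting compares with equal per-family weighting.
# A1's count of 17 out of 40 comes from the SRES authors' statement quoted
# above; the other family counts are assumed here purely for illustration.
scenario_counts = {"A1": 17, "A2": 6, "B1": 9, "B2": 8}   # assumed split of the 40 scenarios
total = sum(scenario_counts.values())

per_scenario = {f: n / total for f, n in scenario_counts.items()}    # equal weight per scenario
per_family = {f: 1 / len(scenario_counts) for f in scenario_counts}  # one vote per family

for family in scenario_counts:
    print(f"{family}: per-scenario weight {per_scenario[family]:.1%}, "
          f"per-family weight {per_family[family]:.1%}")
# Under equal per-scenario weighting the A1 family carries 42.5% of the total
# weight, versus 25.0% under equal per-family weighting.
```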